Kernel methods for vector output
Kernel methods are a well-established tool to analyze the relationship between input data and the corresponding output of a function. Kernels encapsulate the properties of functions in a computationally efficient way and allow algorithms to easily swap functions of varying complexity.
In typical machine learning algorithms, these functions produce a scalar output. Recent development of kernel methods for functions with vector-valued output is due, at least in part, to interest in simultaneously solving related problems. Kernels which capture the relationship between the problems allow them to ''borrow strength'' from each other. Algorithms of this type include multi-task learning (also called multi-output learning or vector-valued learning), transfer learning, and co-kriging. Multi-label classification can be interpreted as mapping inputs to (binary) coding vectors with length equal to the number of classes.
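As a concrete illustration (not drawn from the article text), the following is a minimal sketch of multi-task learning with a separable matrix-valued kernel, using kernel ridge regression over two related tasks. The RBF input kernel, the task-similarity matrix <code>B</code>, and the regularization weight <code>lam</code> are illustrative assumptions, not choices fixed by the theory.
<syntaxhighlight lang="python">
# Sketch: multi-output kernel ridge regression with a separable kernel
# K((x, d), (x', d')) = k(x, x') * B[d, d']  (assumed setup, for illustration).
import numpy as np

def rbf(X1, X2, length_scale=1.0):
    # Scalar input kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def fit_predict(X, Y, X_new, B, lam=1e-2):
    # X: (n, p) inputs; Y: (n, T) outputs for T related tasks.
    n, T = Y.shape
    K = np.kron(rbf(X, X), B)               # (nT, nT) joint Gram matrix
    # Y.ravel() (C order) matches the Kronecker index i*T + d.
    alpha = np.linalg.solve(K + lam * np.eye(n * T), Y.ravel())
    K_new = np.kron(rbf(X_new, X), B)       # cross-covariances to new inputs
    return (K_new @ alpha).reshape(len(X_new), T)

# Two noisy, correlated tasks sharing the same latent function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
f = np.sin(X[:, 0])
Y = np.stack([f + 0.1 * rng.standard_normal(40),
              0.5 * f + 0.1 * rng.standard_normal(40)], axis=1)
B = np.array([[1.0, 0.5], [0.5, 0.5]])      # task similarity (must be PSD)
X_new = np.linspace(-3, 3, 5)[:, None]
print(fit_predict(X, Y, X_new, B))
</syntaxhighlight>
The Kronecker structure <code>np.kron(Kx, B)</code> is what lets the tasks ''borrow strength'': the off-diagonal entries of <code>B</code> couple observations from different tasks, so data from one task informs predictions for the other.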
In the Gaussian process framework, kernels are called covariance functions, and vector-valued output corresponds to considering multiple, possibly correlated, processes. See Bayesian interpretation of regularization for the connection between the two perspectives.
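One standard way to make this correspondence concrete (following the multi-output Gaussian process literature) is the separable, or intrinsic coregionalization, form, in which the covariance between outputs ''d'' and ''d&prime;'' factorizes into a scalar input kernel and an entry of a positive semi-definite matrix ''B'':
<math>\operatorname{cov}\bigl(f_d(x),\, f_{d'}(x')\bigr) = k(x, x')\, B_{d d'}.</math>
Here ''k'' encodes similarity between inputs while ''B'' encodes similarity between outputs; in the Gaussian process view ''B'' is an inter-output covariance matrix, and in the regularization view it determines how strongly the tasks are coupled.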
==History==
The history of learning vector-valued functions is closely linked to transfer learning, a broad term that refers to systems that learn by transferring knowledge between different domains. The fundamental motivation for transfer learning in the field of machine learning was discussed in a NIPS-95 workshop on “Learning to Learn,” which focused on the need for lifelong machine learning methods that retain and reuse previously learned knowledge. Since 1995, research on transfer learning has attracted attention under a variety of names: learning to learn, lifelong learning, knowledge transfer, inductive transfer, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, metalearning, and incremental/cumulative learning.〔S.J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, 22, 2010〕 Interest in learning vector-valued functions was particularly sparked by multitask learning, a framework that tries to learn multiple, possibly different, tasks simultaneously.
Much of the initial research on multitask learning in the machine learning community was algorithmic in nature and, during the 1990s, was applied to methods such as neural networks, decision trees, and k-nearest neighbors.〔Rich Caruana, "Multitask Learning," Machine Learning, 41–76, 1997〕 The use of probabilistic models and Gaussian processes was pioneered and largely developed in the context of geostatistics, where prediction over vector-valued output data is known as cokriging.〔J. Ver Hoef and R. Barry, "Constructing and fitting models for cokriging and multivariable spatial prediction," Journal of Statistical Planning and Inference, 69:275–294, 1998〕〔P. Goovaerts, "Geostatistics for Natural Resources Evaluation," Oxford University Press, USA, 1997〕〔N. Cressie, "Statistics for Spatial Data," John Wiley & Sons Inc. (Revised Edition), USA, 1993〕 Geostatistical approaches to multivariate modeling are mostly formulated around the linear model of coregionalization (LMC), a generative approach for constructing valid covariance functions that has been used for multivariate regression and, in statistics, for emulation of expensive multivariate computer codes. The regularization and kernel theory literature for vector-valued functions followed in the 2000s.〔C.A. Micchelli and M. Pontil, "On learning vector-valued functions," Neural Computation, 17:177–204, 2005〕〔C. Carmeli et al., "Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem," Anal. Appl. (Singap.), 4, 2006〕 While the Bayesian and regularization perspectives were developed independently, they are in fact closely related.〔Mauricio A. Álvarez, Lorenzo Rosasco, and Neil D. Lawrence, "Kernels for Vector-Valued Functions: A Review," Foundations and Trends in Machine Learning, 4(3):195–266, 2012. doi:10.1561/2200000036 (arXiv:1106.6251)〕
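For reference, the LMC models each output as a linear combination of ''Q'' independent latent functions with scalar covariances <math>k_q</math>, so the induced matrix-valued covariance takes the sum-of-separable form
<math>\mathbf{K}(x, x') = \sum_{q=1}^{Q} \mathbf{B}_q\, k_q(x, x'),</math>
where each <math>\mathbf{B}_q</math> is a positive semi-definite coregionalization matrix; the single-kernel separable form given earlier is recovered as the special case ''Q'' = 1.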
